replicated softmax model
This paper considers automatic classification of unstructured social group activity videos. To bridge the semantic gap between low-level features and class labels, the authors adopt a latent topic model based on the replicated softmax to extract topics as mid-level representations for video classification. The main idea of the paper is the integration of sparse Bayesian learning with the replicated softmax, yielding the proposed model, referred to as the relevance topic model (RTM). In RTM, discriminative topics and sparse classifier weights are learned jointly, and the authors propose a variational EM algorithm for model parameter estimation and inference. The authors test their algorithm on a benchmark dataset and demonstrate better performance compared to other supervised topic models and several baseline algorithms.
Modeling Documents with Deep Boltzmann Machines
Srivastava, Nitish, Salakhutdinov, Ruslan R, Hinton, Geoffrey E.
We introduce a Deep Boltzmann Machine model suitable for modeling and extracting latent semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This parameter tying enables an efficient pretraining algorithm and a state initialization scheme that aids inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.
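The parameter tying the abstract alludes to can be illustrated with a small sketch. This assumes the tied-weight construction of the paper's document DBM, in which the second-layer softmax units share the visible weight matrix; the sizes, variable names, and the scaled-bias form below are illustrative assumptions, not quoted from the abstract:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: K-word vocabulary, F binary units in the first
# hidden layer. W is shared by the visible softmax units and, via the
# tying, by the second-layer softmax units as well.
rng = np.random.default_rng(0)
K, F = 500, 20
W = 0.01 * rng.standard_normal((K, F))
a = np.zeros(F)  # first-hidden-layer biases

def h1_probabilities(doc_counts, h2_counts):
    """Sketch of p(h1_j = 1 | V, H2) under the tied-weight DBM.

    Because the top-layer weights are tied to W, bottom-up input from
    the document counts and top-down input from the second-layer counts
    pass through the same matrix, so the conditional has the form of an
    ordinary RBM conditional over the summed counts.
    """
    n = doc_counts.sum()   # document length N
    m = h2_counts.sum()    # number of second-layer softmax samples M
    return sigmoid((doc_counts + h2_counts) @ W + (n + m) * a)
```

Under this tying, the M top-layer samples act like M extra words appended to the document, which is the sense in which the model "can be trained just as efficiently as a standard Restricted Boltzmann Machine."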
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.91)
Replicated Softmax: an Undirected Topic Model
Hinton, Geoffrey E., Salakhutdinov, Ruslan R.
We show how to model documents as bags of words using a family of two-layer, undirected graphical models. Each member of the family has the same number of binary hidden units but a different number of "softmax" visible units. All of the softmax units in all of the models in the family share the same weights to the binary hidden units. We describe efficient inference and learning procedures for such a family. Each member of the family models the probability distribution of documents of a specific length as a product of topic-specific distributions rather than as a mixture, and this gives much better generalization than Latent Dirichlet Allocation for modeling the log probabilities of held-out documents. The low-dimensional topic vectors learned by the undirected family are also much better than LDA topic vectors for retrieving documents that are similar to a query document. The learned topics are more general than those found by LDA because precision is achieved by intersecting many general topics rather than by selecting a single precise topic to generate each word.
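The shared-weight, product-of-topics construction described in the abstract can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: the sizes and the example document are made up, and the hidden bias scaled by document length follows the standard Replicated Softmax formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: K vocabulary words, F binary hidden (topic) units.
rng = np.random.default_rng(0)
K, F = 1000, 50
W = 0.01 * rng.standard_normal((K, F))   # weights shared across all document lengths
a = np.zeros(F)                          # hidden (topic) biases
b = np.zeros(K)                          # softmax (word) biases

def topic_activations(counts):
    """p(h_j = 1 | v) for a bag-of-words count vector.

    All N softmax units of an N-word document share W, so each hidden
    unit sees a weighted sum of word counts; the hidden bias is scaled
    by N so models for different document lengths remain comparable.
    """
    n = counts.sum()
    return sigmoid(counts @ W + n * a)

def word_distribution(hidden):
    """Softmax over the vocabulary given a binary topic vector h.

    Adding the active topics' weight rows before the softmax is the
    product-of-experts form: each word must be plausible under the
    intersection of all active topics.
    """
    scores = hidden @ W.T + b
    e = np.exp(scores - scores.max())
    return e / e.sum()
```

The vector returned by `topic_activations` plays the role of the low-dimensional topic representation the abstract reports as useful for retrieving documents similar to a query.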